Never Before Heard Sounds

AI Song Contest 2021 / participants

TEAM / Never Before Heard Sounds
SONG / Features Music
TEAM MEMBERS /
Yotam Mann, Chris Deaner, James Baluyut

LISTEN AND EVALUATE Features Music

The lines are now closed. You can no longer vote. Watch the Award Ceremony on July 6 to find out which song wins the AI Song Contest 2021!


ABOUT THE TEAM

We are a group of musicians and programmers searching for new ways of making music with machine learning. Chris and James have been making music together for over 20 years. Yotam and Chris have been building software instruments for over 7 years. This collaboration is the first time the three of us have written music together using an instrument that we’ve created.

The instrument is a custom machine learning model, trained on recordings that we’ve carefully sourced from permissively licensed datasets or recorded ourselves. The model resynthesizes any input audio using a neural net trained on vocal and instrumental recordings. The result is an AI interpretation of the audio that retains the pitches and rhythms of the original but adds the texture and timbre learned from the training set.
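To make the idea concrete, here is a minimal sketch of the general analysis/resynthesis technique the description suggests. This is our illustration, not the team’s actual model: a toy autocorrelation pitch tracker stands in for a real f0 estimator, and a plain sine oscillator stands in for the neural decoder that would impose the learned timbre.

```python
import numpy as np

SR = 16000    # sample rate assumed for this sketch
FRAME = 512   # analysis frame / hop size

def analyze(audio):
    """Extract frame-wise pitch (Hz) and RMS loudness from mono audio.

    Only pitch and loudness survive this step; the source timbre
    is deliberately discarded.
    """
    pitches, louds = [], []
    for start in range(0, len(audio) - FRAME, FRAME):
        frame = audio[start:start + FRAME]
        louds.append(float(np.sqrt(np.mean(frame ** 2))))
        # Toy autocorrelation pitch tracker, searching 60 Hz .. 1 kHz.
        ac = np.correlate(frame, frame, mode="full")[FRAME - 1:]
        lo, hi = SR // 1000, SR // 60
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitches.append(SR / lag)
    return np.array(pitches), np.array(louds)

def resynthesize(pitches, louds):
    """Drive a sine oscillator with the extracted contours.

    In a real timbre-transfer instrument this step is a neural decoder
    that imposes the timbre learned from the training set; a sine wave
    is the simplest possible stand-in.
    """
    out = np.zeros(len(pitches) * FRAME)
    phase = 0.0
    t = np.arange(FRAME)
    for i, (f0, amp) in enumerate(zip(pitches, louds)):
        out[i * FRAME:(i + 1) * FRAME] = amp * np.sin(phase + 2 * np.pi * f0 * t / SR)
        phase += 2 * np.pi * f0 * FRAME / SR  # keep phase continuous across frames
    return out
```

Swapping the sine oscillator for a decoder trained on, say, choir recordings is what turns this skeleton into a timbre-transfer instrument: the input’s pitches and rhythms survive, while its texture is replaced.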

ABOUT THE SONG

We began composing this song under the constraint that each track should be performed by us (no samples or loops) and transformed through our AI instrument. We followed our original plan with one notable exception: the unaltered drum set track, which we felt kept the energy we were looking for. All other sounds were passed through our instrument, and every sound in the song was performed and recorded by us in a multi-hour session in a rehearsal space in DUMBO, Brooklyn. We mixed the song over the next few days.

Players and input instruments:
Yotam Mann: Vocals, Vibes, Synth
Chris Deaner: Vocals, Drum set, Vibes
James Baluyut: Guitar

[Photo: the instrument]

ABOUT THE HUMAN-AI CO-CREATION PROCESS

Every sound on the track (except for the drums) has been processed using a brand-new instrument that we’ve been developing over the past year with the goal of creating never before heard sounds. The instrument takes any audio as input and resynthesizes it using the textures and timbres learned from a dataset of instrumental or vocal recordings. The output is imperfect and idiosyncratic, producing unexpected (and sometimes bizarre) combinations of the model’s training data and our input performance.

The models we used on this track are:
* Choir model, trained on unaccompanied church choirs
* String quartet model
* Alto and tenor solo voice models, trained on the VocalSet dataset
* 8-bit model, trained on Nintendo game scores
* Guitar model, trained on datasets contributed by James Baluyut

The process for each track was to record a part in the studio and then upload it to a server that performs the audio processing. We would often try a few different models on the track and tweak the settings until we found one that worked for the part. We then downloaded the transformed audio and mixed it in Ableton Live.

We are also working on a real-time version of the instrument, though we did not use it on this track because it is still under active development.
